21 research outputs found

    Étude des sous-graphes communs des Graphes de Dépendance d’Appels Systèmes pour la classification de logiciels malveillants

    Distinguishing legitimate software from malicious software is a problem that requires considerable expertise. One approach to building malware detection software consists in extracting System Call Dependency Graphs (SCDGs), which summarize the behavior of a program. Once SCDGs are extracted, a learning phase identifies sub-graphs characteristic of malicious behaviors. To classify the graph of an unknown binary, we check whether it contains such sub-graphs. These techniques have proved effective, but no analysis of the sub-graphs extracted during the learning phase has been conducted so far. We study the sub-graphs we find and showcase preprocessing steps on the graphs that improve learning and classification performance. The approach has been applied to graphs extracted from Mirai, a malware that created a large botnet in 2016 to perform distributed denial-of-service attacks. We show that the preprocessing steps tremendously improve the speed of both learning and classification.
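
    As an illustration of the classification step, the sketch below (our own, not the paper's implementation) tests whether a binary's SCDG contains previously learned malicious sub-graphs; it assumes SCDGs are stored as networkx directed graphs whose nodes carry a "syscall" label.

        # Hypothetical sketch of sub-graph-based classification; names and data
        # representation are assumptions, not taken from the paper.
        import networkx as nx
        from networkx.algorithms import isomorphism

        def contains_signature(scdg: nx.DiGraph, signature: nx.DiGraph) -> bool:
            """True if `signature` occurs as a label-preserving, node-induced sub-graph of `scdg`."""
            matcher = isomorphism.DiGraphMatcher(
                scdg, signature,
                node_match=isomorphism.categorical_node_match("syscall", None))
            return matcher.subgraph_is_isomorphic()

        def classify(scdg: nx.DiGraph, signatures: list[nx.DiGraph]) -> str:
            """Flag the binary as malicious if any learned malicious sub-graph occurs in its SCDG."""
            return "malicious" if any(contains_signature(scdg, s) for s in signatures) else "legitimate"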

    A Benchmarks Library for Extended Parametric Timed Automata

    Parametric timed automata are a powerful formalism for reasoning about concurrent real-time systems with unknown or uncertain timing constants. In order to test the efficiency of new algorithms, a fair set of benchmarks is required. We present an extension of the IMITATOR benchmarks library, which has accumulated over the years a number of case studies from academic and industrial contexts. We extend the library here with several dozen new benchmarks highlighting several new features: liveness properties, extensions of (parametric) timed automata (including stopwatches and multi-rate clocks), and unsolvable toy benchmarks. These latter additions help emphasize the limits of state-of-the-art parameter synthesis techniques, with the hope of developing new dedicated algorithms in the future. Comment: this is the author (and extended) version of the manuscript of the same name published in the proceedings of the 15th International Conference on Tests and Proofs (TAP 2021).
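
    For readers unfamiliar with the formalism, a textbook-style definition (reconstructed here, not quoted from the paper) is:

        % Reconstructed textbook-style definition of a parametric timed automaton (PTA).
        A parametric timed automaton is a tuple
        \[
          \mathcal{A} = (\Sigma, L, \ell_0, F, \mathbb{X}, \mathbb{P}, I, E),
        \]
        where $\Sigma$ is a finite set of actions, $L$ a finite set of locations with
        initial location $\ell_0 \in L$ and accepting locations $F \subseteq L$,
        $\mathbb{X}$ a finite set of real-valued clocks, $\mathbb{P}$ a finite set of
        parameters, $I$ maps each location to an invariant, and $E$ is a finite set of
        edges $(\ell, g, a, R, \ell')$ with guard $g$, action $a \in \Sigma$ and clock
        reset set $R \subseteq \mathbb{X}$. Guards and invariants are conjunctions of
        constraints $x \bowtie z$ with $x \in \mathbb{X}$,
        $\bowtie \in \{<, \leq, =, \geq, >\}$, and $z$ a parameter in $\mathbb{P}$ or an
        integer constant. A parameter valuation $v : \mathbb{P} \to \mathbb{Q}_{\geq 0}$
        instantiates $\mathcal{A}$ into a timed automaton $v(\mathcal{A})$.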

    Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata

    Timing information leakage occurs whenever an attacker successfully deduces confidential internal information by observing timed information such as events with timestamps. Timed automata are an extension of finite-state automata with a set of clocks evolving linearly that can be tested or reset, making this formalism able to reason about systems involving concurrency and timing constraints. In this paper, we summarize a recent line of work using timed automata as the input formalism, in which we assume that the attacker has access (only) to the system execution time. First, we address the following execution-time opacity problem: given a timed system modeled by a timed automaton, a secret location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the secret location was visited. This means that for any such execution time, the system is opaque: either the final location is not reachable, or it is reachable with that execution time both by a run visiting and by a run not visiting the secret location. We also address the full execution-time opacity problem, asking whether the system is opaque for all execution times; we also study a weak counterpart. Second, we add timing parameters, which are a way to configure a system: we identify a subclass of parametric timed automata with some decidability results. In addition, we devise a semi-algorithm for synthesizing timing parameter valuations guaranteeing that the resulting system is opaque. Third, we report on problems arising when the secret itself has an expiration date, thus defining expiring execution-time opacity problems. We finally show that our method can also apply to program analysis with configurable internal timings. Comment: in Proceedings TiCSA 2023, arXiv:2310.18720. This invited paper mainly summarizes results on opacity from two recent works published in ToSEM (2022) and at ICECCS 2023, providing unified notations and concept names for the sake of consistency. In addition, we prove a few original results absent from these works.
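
    The execution-time opacity notions above can be stated compactly; the following formalization is a reconstruction, and its notation may differ slightly from the cited works.

        % Reconstructed formalization of execution-time (ET) opacity; notation ours.
        Let $DVisit^{priv}(\mathcal{A})$ (resp. $DVisit^{\overline{priv}}(\mathcal{A})$)
        denote the set of durations of runs from the initial to the final location that
        visit (resp. do not visit) the secret location. Then
        \[
          \mathcal{A} \text{ is ET-opaque for duration } d
          \iff d \in DVisit^{priv}(\mathcal{A}) \cap DVisit^{\overline{priv}}(\mathcal{A}),
        \]
        \[
          \mathcal{A} \text{ is fully ET-opaque}
          \iff DVisit^{priv}(\mathcal{A}) = DVisit^{\overline{priv}}(\mathcal{A}),
        \]
        and the weak variant relaxes the equality to the inclusion
        $DVisit^{priv}(\mathcal{A}) \subseteq DVisit^{\overline{priv}}(\mathcal{A})$.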

    Désobscurcissement de prédicats opaques

    Code obfuscation is used today as a method to protect software, in particular with respect to intellectual-property concerns. Indeed, it makes it possible to prevent, or at least hinder, reverse-engineering analyses while preserving identical behavior. Obfuscation therefore raises a scientific problem when it comes to detecting such programs: without overcoming this stage of their design, it is impossible to determine whether a piece of software is malicious. This research work thus focused on enabling the deobfuscation of a specific kind of obfuscation, known as "opaque predicates", while refraining from targeting particular construction patterns. The contributions put forward are: (i) an analysis of the current state of obfuscation and deobfuscation methods through an extensive literature review, and (ii) the proposal of a static deobfuscation method, based on machine-learning classification, for two-way opaque predicates.
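
    To make the notion concrete, here is a small, purely illustrative example (not taken from the thesis) of an invariant opaque predicate and a two-way opaque predicate in Python:

        # Illustrative opaque predicates (toy examples, not from the thesis).

        def with_invariant_predicate(x: int) -> int:
            # x*x + x = x(x+1) is always even, so the predicate is always true;
            # the else-branch is dead code that only exists to mislead analysis.
            if (x * x + x) % 2 == 0:
                return x + 1
            else:
                return x - 1          # never executed

        def with_two_way_predicate(x: int) -> int:
            # The predicate's truth value depends on x, but both branches compute
            # the same function (abs), so the branch outcome reveals nothing.
            if (x * (x + 1)) % 4 == 0:
                return x if x >= 0 else -x
            else:
                return max(x, -x)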

    Expiring opacity problems in parametric timed automata

    Information leakage can have dramatic consequences on the security of real-time systems. Timing leaks occur when an attacker is able to infer private behavior from timing information. In this work, we propose a definition of expiring timed opacity w.r.t. execution time, where a system is opaque whenever the attacker is unable to deduce the reachability of some private state based solely on the execution time; in addition, the secrecy is violated only when the private state was visited "recently", i.e., within a given time bound (or expiration date) prior to system completion. This has an interesting parallel with concrete applications, notably cache deducibility: knowing the cache content may be useless to the attacker too long after it was observed. We study here expiring timed opacity problems in timed automata. We consider the set of time bounds (or expiration dates) for which a system is opaque and show when they can be effectively computed for timed automata. We then study the decidability of several parameterized problems, when not only the bounds but also some internal timing constants become timing parameters of unknown constant value.
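
    Sketching the definition in symbols (our reconstruction; the paper's exact notation and phrasing may differ):

        % Reconstructed sketch of expiring execution-time opacity; notation ours.
        Fix an expiration bound $\Delta \geq 0$. For a run of duration $d$, say the
        secret is \emph{unexpired} if the run visits the private location at some
        absolute time $t$ with $d - t \leq \Delta$, and \emph{expired or absent}
        otherwise. Writing $DV^{\leq \Delta}(\mathcal{A})$ and $DV^{> \Delta}(\mathcal{A})$
        for the corresponding sets of run durations, the system is fully
        $\Delta$-expiring ET-opaque iff
        \[
          DV^{\leq \Delta}(\mathcal{A}) \;=\; DV^{> \Delta}(\mathcal{A}),
        \]
        i.e., an attacker observing only the execution time cannot tell whether the
        private location was visited within $\Delta$ time units of completion.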

    strategFTO: Untimed control for timed opacity

    This work is partially supported by the ANR-NRF French-Singaporean research program ProMiS (ANR-19-CE25-0015 / 2019 ANR NRF 0092) and the ANR research program BisoUS. Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several universities as well as other organizations. We introduce a prototype tool, strategFTO, addressing the verification of a security property in critical software. We consider a recent definition of timed opacity where an attacker aims to deduce some secret while having access only to the total execution time. The system, here modeled by timed automata, is deemed opaque if, for any execution time, there are either no corresponding runs, or both public and private corresponding runs. We focus on the untimed control problem: exhibiting a controller, i.e., a set of allowed actions, such that the system restricted to those actions is fully timed-opaque. We first show that this problem is not more complex than the full timed opacity problem, and then we propose an algorithm, implemented and evaluated in practice.
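
    A brute-force view of the untimed control problem can be sketched as follows (hypothetical code, not strategFTO's implementation; is_fully_timed_opaque stands in for whatever opacity check, e.g. a model-checker call, is available):

        # Hypothetical sketch of untimed control: search for a set of allowed actions
        # whose restriction makes the system fully timed-opaque. Not strategFTO's code.
        from itertools import combinations
        from typing import Callable, FrozenSet, Iterable, Optional

        def find_controller(actions: Iterable[str],
                            is_fully_timed_opaque: Callable[[FrozenSet[str]], bool]
                            ) -> Optional[FrozenSet[str]]:
            """Return a largest action set whose restriction is fully timed-opaque, or None."""
            actions = list(actions)
            # Try larger controllers first: they constrain the system the least.
            for size in range(len(actions), -1, -1):
                for allowed in combinations(actions, size):
                    if is_fully_timed_opaque(frozenset(allowed)):
                        return frozenset(allowed)
            return None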

    Guaranteeing Timed Opacity using Parametric Timed Model Checking

    This is the author version of the manuscript of the same name published in ACM Transactions on Software Engineering and Methodology (ToSEM). Information leakage can have dramatic consequences on systems security. Among harmful information leaks, timing information leakage occurs whenever an attacker successfully deduces confidential internal information. In this work, we consider that the attacker has access (only) to the system execution time. We address the following timed opacity problem: given a timed system, a private location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the system went through the private location. We also consider the full timed opacity problem, asking whether the system is opaque for all execution times. We show that these problems are decidable for timed automata (TAs) but become undecidable when one adds parameters, yielding parametric timed automata (PTAs). We identify a subclass with some decidability results. We then devise an algorithm for synthesizing PTA parameter valuations guaranteeing that the resulting TA is opaque. We finally show that our method can also apply to program analysis.
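
    In symbols, the synthesis problem can be phrased as follows (our reconstruction, with notation that may differ from the paper's):

        % Reconstructed statement of the parameter synthesis problem; notation ours.
        Given a PTA $\mathcal{A}$ over parameters $\mathbb{P}$, a private location
        $\ell_{priv}$ and a final location $\ell_f$, compute
        \[
          \{\, v : \mathbb{P} \to \mathbb{Q}_{\geq 0}
             \mid v(\mathcal{A}) \text{ is fully timed opaque w.r.t. } \ell_{priv}
             \text{ and } \ell_f \,\}.
        \]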